Search results for "bandit problems"
Showing 2 of 2 documents
Generalized Bayesian pursuit: A novel scheme for multi-armed Bernoulli bandit problems
2011
Published version of a chapter in the book: IFIP Advances in Information and Communication Technology. Also available from the publisher at: http://dx.doi.org/10.1007/978-3-642-23960-1_16 In recent decades, a myriad of approaches to the multi-armed bandit problem have appeared in several different fields. The current top-performing algorithms from the field of Learning Automata reside in the Pursuit family, while UCB-Tuned and the ε-greedy class of algorithms can be seen as state-of-the-art regret-minimizing algorithms. Recently, however, the Bayesian Learning Automaton (BLA) outperformed all of these, and other schemes, in a wide range of experiments. Although seemingly incompatible, in…
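The Bayesian arm-selection idea behind BLA can be illustrated with Thompson-style sampling for Bernoulli bandits: each arm keeps a Beta posterior over its reward probability, and the arm whose posterior sample is largest is pulled. This is a minimal sketch, not the paper's exact scheme; the arm probabilities and horizon are illustrative assumptions.

```python
import random

def bayesian_bernoulli_bandit(arm_probs, horizon=10000, seed=0):
    """Thompson-style selection for Bernoulli bandits: maintain a
    Beta(a, b) posterior per arm, draw one sample from each posterior,
    and pull the arm with the largest sample. (Illustrative sketch of
    the Bayesian idea BLA builds on, not the published algorithm.)"""
    rng = random.Random(seed)
    k = len(arm_probs)
    a = [1] * k  # posterior successes + 1 (uniform Beta(1,1) prior)
    b = [1] * k  # posterior failures + 1
    pulls = [0] * k
    for _ in range(horizon):
        samples = [rng.betavariate(a[i], b[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < arm_probs[arm] else 0
        a[arm] += reward
        b[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

# hypothetical Bernoulli arms; play concentrates on the best arm
counts = bayesian_bernoulli_bandit([0.9, 0.6, 0.5])
```

As the posteriors sharpen, samples from clearly inferior arms rarely win the comparison, so exploration decays automatically without a tuned schedule.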
Accelerated Bayesian learning for decentralized two-armed bandit based decision making with applications to the Goore Game
2012
Published version of an article in the journal: Applied Intelligence. Also available from the publisher at: http://dx.doi.org/10.1007/s10489-012-0346-z The two-armed bandit problem is a classical optimization problem where a decision maker sequentially pulls one of two arms attached to a gambling machine, with each pull resulting in a random reward. The reward distributions are unknown, and thus one must balance exploiting existing knowledge about the arms against obtaining new information. Bandit problems are particularly fascinating because a large class of real-world problems, including routing, Quality of Service (QoS) control, game playing, and resource allocation, can be solved …
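The exploration–exploitation trade-off the abstract describes can be sketched with the classic ε-greedy rule on a two-armed Bernoulli bandit: with probability ε pull a random arm (explore), otherwise pull the arm with the best empirical mean (exploit). The arm probabilities, ε, and horizon below are illustrative assumptions, not values from the article.

```python
import random

def epsilon_greedy(arm_probs, epsilon=0.1, horizon=10000, seed=1):
    """ε-greedy play on a Bernoulli bandit: explore a random arm with
    probability ε, otherwise exploit the empirically best arm."""
    rng = random.Random(seed)
    k = len(arm_probs)
    pulls = [0] * k  # times each arm was pulled
    wins = [0] * k   # rewards observed per arm
    for _ in range(horizon):
        if rng.random() < epsilon or 0 in pulls:
            arm = rng.randrange(k)  # explore (or ensure each arm is tried)
        else:
            arm = max(range(k), key=lambda i: wins[i] / pulls[i])  # exploit
        reward = 1 if rng.random() < arm_probs[arm] else 0
        pulls[arm] += 1
        wins[arm] += reward
    return pulls

# hypothetical two-armed machine with unknown reward probabilities
pulls = epsilon_greedy([0.8, 0.4])
```

With a fixed ε the scheme keeps exploring forever, which is exactly the inefficiency that Bayesian schemes like the one in this article aim to avoid.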